Ben Lorica and Evangelos Simoudis on AI Regulation Challenges, China’s Engineering Advantage, and Enterprise AI Reality Check.
Ben Lorica and Evangelos Simoudis discuss three critical topics for AI practitioners: the complexities of AI regulation, including debates over foundation model oversight versus domain-specific rules and concerns about IP protection in AI training data; China's engineering-led AI strategy and what it means for global competition; and the realities of moving enterprise AI projects from pilot to production.
Related content:
- A video version of this conversation is available on our YouTube channel.
- Dan Wang’s book: Breakneck — China’s Quest to Engineer the Future
- If Americans Are Lawyers and the Chinese Are Engineers, Who Is Going to Win?
- A Troubled Man and His Chatbot
- Women are using AI therapy to work through breakups and jealousy. What could go wrong?
- Playing the Field with My A.I. Boyfriends
- AI startup Anthropic agrees to pay $1.5bn to settle book piracy lawsuit
- My mom and Dr. DeepSeek
- MIT Survey: The State of AI in Business 2025
- Ben Lorica and Evangelos Simoudis → When AI Eats the Bottom Rung of the Career Ladder
- Evangelos Simoudis’ Blog
- US vs. China: Who Wins the Critical AI Diffusion Battle?
- SB 1047 Unpacked
- Ben Lorica and David Talby → 2025 AI Governance Survey
Transcript
Below is a heavily edited excerpt, in Question & Answer format.
AI Regulation and Governance
What’s the current state of AI regulation in the U.S., and why should AI practitioners be concerned about the fragmented approach?
The U.S. currently has a fragmented, "free-for-all" regulatory landscape in which individual states are creating their own rules. This approach, similar to what we've seen with autonomous vehicles, slows down adoption and creates significant compliance challenges for development teams. Companies often end up building for the most stringent state's regulations to ensure nationwide coverage, which is an inefficient and suboptimal solution.
Should regulation focus on foundation models themselves or their applications?
A practical approach should address both how models are trained and how they’re used. Trying to regulate “foundation models” as a static category is difficult because the technology’s capabilities change every six months. It is more effective to distinguish between two usage patterns: 1) models embedded in domain-specific applications (like healthcare), which can be overseen by existing sector regulators (like the FDA), and 2) direct model use through general-purpose chatbots that cut across domains.
What are the biggest IP and legal risks for teams using foundation models in production?
There are two primary IP risks. First, consumption risk: the model you’re using may have been trained on copyrighted or proprietary data without proper rights, creating downstream legal exposure (as seen in lawsuits like The New York Times vs. OpenAI). Second, creation risk: AI-generated code or content might inadvertently infringe on someone else’s IP or introduce security vulnerabilities into your enterprise systems. Practical mitigations include: contractually verified training-data rights from vendors, IP indemnities, code provenance scanning in CI/CD pipelines, and maintaining detailed logs for traceability.
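One of the mitigations above, code provenance scanning in CI/CD pipelines, can be sketched as a simple pre-merge check. The provenance comment convention (`ai-provenance: model=… log=…`) below is a hypothetical team convention, not a standard; the idea is only to illustrate how generated code can be forced to carry a traceable annotation before it is merged.

```python
import re
import sys
from pathlib import Path

# Hypothetical convention: any AI-generated source file must carry a
# provenance comment naming the model and a log/trace reference, e.g.:
#   # ai-provenance: model=<name> log=<trace-id>
PROVENANCE_RE = re.compile(r"ai-provenance:\s*model=\S+\s+log=\S+")

def missing_provenance(paths):
    """Return the subset of files that contain no provenance annotation."""
    flagged = []
    for path in paths:
        text = Path(path).read_text(encoding="utf-8", errors="ignore")
        if not PROVENANCE_RE.search(text):
            flagged.append(path)
    return flagged

if __name__ == "__main__":
    # Typical use: invoked by a CI step with the changed files as arguments.
    bad = missing_provenance(sys.argv[1:])
    for path in bad:
        print(f"missing provenance annotation: {path}")
    sys.exit(1 if bad else 0)
```

A check like this does not prove the code is non-infringing, but it preserves the detailed logs and traceability the mitigation list calls for, so that any later IP question can be traced back to a specific model and session.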
How should enterprises address employee AI usage and literacy concerns?
Enterprises are discovering that employees are already using public AI tools, with convenience often trumping careful evaluation of the outputs. This creates risks around data security, accuracy, and IP. Companies need comprehensive AI literacy programs that teach employees to assess response quality, understand appropriate use cases, and recognize the technology’s limitations. These programs should be paired with clear, written AI use policies and internal tools that guide users toward more responsible behavior.
China’s Engineering-Led AI Strategy
How does China’s engineering-led approach differ from the U.S. system, and what does this mean for AI competition?
China operates as an “engineering-led” system where leadership predominantly comes from engineering backgrounds, while the U.S. is more “lawyer-led.” This creates a fundamental difference in national strategy. China views the country as a large-scale system to be optimized; when AI becomes a priority, they can rapidly mobilize national resources, such as building more nuclear power stations than the rest of the world combined to ensure an adequate energy supply for AI workloads.
What is “process knowledge” and why is it crucial for successful AI implementation?
Process knowledge goes beyond having the models (tools) or algorithms (recipes); it’s the deep, tacit understanding of how to implement and scale technology within a specific organizational context. While the U.S. excels at invention, China has prioritized mastering the entire process, from innovation to mass manufacturing and diffusion. For AI practitioners, this means a successful implementation isn’t just about having the best model—it’s about embedding it into repeatable, scalable processes to achieve reliable outcomes.
How do timeline and funding differences affect the competitive landscape?
Chinese state-backed initiatives operate on much longer timelines compared to U.S. corporations focused on satisfying shareholders with quarterly results. The Chinese government supports hyper-competition by funding multiple players simultaneously—for example, in the auto industry—knowing that while only a few will survive, the entire sector innovates at an accelerated pace. This long-term view enables massive infrastructure investments that are critical for scaling AI.
How are U.S. export controls affecting China’s semiconductor and AI strategy?
China is treating the U.S. export controls on advanced semiconductors as a "Sputnik moment" to catalyze the development of its domestic chip capabilities. The strategy is not simply to accept older-generation chips but to use the restrictions as motivation to rally the country to build its entire semiconductor stack domestically. This is accelerating "sovereign AI" trends and hardware diversification, forcing Chinese companies to adopt and improve their own domestic hardware.
What key governance difference should global AI teams understand?
China’s governance model prioritizes commercial success, technology diffusion, and social stability over the individual privacy protections emphasized in the West. While China has data privacy regulations, they notably exempt government use, enabling large-scale data collection and rapid deployment of technologies like facial recognition. This governance approach is also being exported to developing countries, meaning global product teams must manage different regional expectations and compliance requirements.
Enterprise AI Implementation Reality
How should teams interpret reports of high AI project failure rates?
The high failure rates reported in surveys reflect a natural “digestion phase” that occurs with any new, transformative technology, not a fundamental failure of AI itself. Many initial, hype-driven pilot projects have concluded, and enterprises are now realistically evaluating costs, ROI, and the operational changes required for production. This pause is a normal and necessary step; the key is to proceed with a more disciplined strategy, because competitors and employees are already using the technology.
What’s actually causing AI projects to struggle during the pilot-to-production transition?
Projects often struggle because of a mismatch between a successful pilot and the reality of production. Common causes include: unrealistic expectations about current AI capabilities (“we haven’t found the value that was advertised”), an insufficient understanding of the total implementation cost (including personnel and infrastructure), and underestimating production-grade requirements for reliability, security, and compliance.
How should teams approach the “build vs. buy” decision for AI implementations?
Companies should seriously consider partnering with specialized startups rather than building everything internally, especially for complex problem domains where startups have developed focused expertise. However, any external solution will likely require significant customization to work with your existing processes. The key is to find vendors with demonstrated traction solving similar problems in other enterprise environments.
What are the critical capabilities every enterprise should establish before scaling AI implementations?
Two foundational areas are essential. First, comprehensive AI literacy programs that extend beyond technical teams to all employees, teaching them to assess response quality and use tools responsibly. Second, a full understanding of AI costs—not just computational expenses, but also personnel requirements, integration costs, operational overhead, and how to calculate a realistic ROI.
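The cost accounting described above reduces to simple arithmetic, but the point is which line items get counted. A minimal sketch, where every figure is an illustrative placeholder rather than a benchmark:

```python
def total_cost(compute, personnel, integration, operations):
    """Full annual cost of an AI deployment, not just the compute bill."""
    return compute + personnel + integration + operations

def roi(annual_benefit, annual_cost):
    """ROI as a fraction: (benefit - cost) / cost."""
    return (annual_benefit - annual_cost) / annual_cost

# Illustrative annual figures (USD); real numbers vary widely by deployment.
cost = total_cost(compute=120_000, personnel=300_000,
                  integration=80_000, operations=60_000)
print(f"total cost: ${cost:,}")
print(f"ROI on $700k of annual benefit: {roi(700_000, cost):.0%}")
```

In this made-up example, compute is barely a fifth of the total: budgeting on computational expense alone would overstate the ROI severalfold, which is exactly the miscalculation the paragraph warns against.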
